
    Fermion Dark Matter with Scalar Triplet at Direct and Collider Searches

    Fermion dark matter (DM), realized as an admixture of additional singlet and doublet vector-like fermions, provides an attractive framework allowed by relic density and direct search constraints at the TeV scale, although its discovery potential at the Large Hadron Collider (LHC) is limited. Extending the model with a scalar triplet can yield neutrino masses and relax the direct search constraint on the DM through a pseudo-Dirac mass splitting. This, in turn, allows the model to live in a larger region of parameter space and opens the door, if only slightly, for detection at the LHC. The model can, however, see an early discovery at the International Linear Collider (ILC) without much fine-tuning. The complementarity of the LHC, ILC and direct search prospects of this framework is studied in this paper.
    Comment: 55 pages, 28 figures, version accepted in PR

    Towards Building Generalizable Speech Emotion Recognition Models

    Abstract: Detecting the mental state of a person has implications in psychiatry, medicine, psychology and human-computer interaction systems, among others. It includes (but is not limited to) a wide variety of problems such as emotion detection, valence-affect-dominance state prediction, mood detection and detection of clinical depression. In this thesis we focus primarily on emotion recognition. Like any recognition system, building an emotion recognition model consists of two steps: (1) extraction of meaningful features that help in classification, and (2) development of an appropriate classifier. Because speech data are non-invasive and easy to collect, speech has become a popular candidate for feature extraction. However, an ideal system should be agnostic to speaker and channel effects. While feature normalization schemes can counter these problems to some extent, we still see a drastic drop in performance when the training and test data sets are mismatched. In this dissertation we explore some novel ways of building models that are more robust to speaker and domain differences. Training discriminative classifiers involves learning a conditional distribution p(y_i|x_i), given a set of feature vectors x_i and the corresponding labels y_i, i=1,...,N. For a classifier to generalize rather than overfit the training data, the resulting conditional distribution p(y_i|x_i) should vary smoothly over the inputs x_i. Adversarial training procedures enforce this smoothness using manifold regularization techniques. Manifold regularization makes the model's output distribution more robust to local perturbations added to a datapoint x_i.
    In the first part of the dissertation, we investigate two training procedures: (i) adversarial training, where we determine the perturbation direction based on the given labels for the training data, and (ii) virtual adversarial training, where we determine the perturbation direction based only on the output distribution of the training data. We demonstrate the efficacy of adversarial training procedures by performing a k-fold cross-validation experiment on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus and a cross-corpus performance analysis on three separate corpora. We compare their performance to that of models using other regularization schemes, such as L1/L2 and a graph-based manifold regularization scheme. Results show improvement over a purely supervised approach, as well as better generalization in cross-corpus settings. Our second approach to better discriminating between emotions leverages multi-modal learning and automatic speech recognition (ASR) systems to improve the generalizability of an emotion recognition model that requires only speech as input. Previous studies have shown that emotion recognition models using only acoustic features do not perform satisfactorily in detecting valence level. Text analysis has been shown to be helpful for sentiment classification. We compared the classification accuracies of an audio-only model, a text-only model and a multi-modal system leveraging both, by performing a cross-validation analysis on the IEMOCAP dataset. Confusion matrices show that it is valence-level detection in particular that improves when textual information is incorporated. In the second stage of experiments, we used three ASR application programming interfaces (APIs) to obtain transcriptions. We compare the performance of multi-modal systems using the ASR transcriptions with each other and with that of a system using ground-truth transcriptions. This is followed by a cross-corpus study.
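The labeled adversarial-training procedure in (i) can be sketched for a toy linear softmax classifier (a hypothetical illustration, not the thesis code): the perturbation direction is the gradient of the cross-entropy loss with respect to the input, and training on the perturbed points encourages the output distribution to vary smoothly around each x_i.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adversarial_perturbation(x, y_onehot, W, b, eps):
    """Fast-gradient-sign perturbation for a linear softmax classifier.
    For this model the input gradient of cross-entropy is W.T @ (p - y)."""
    p = softmax(W @ x + b)
    grad_x = W.T @ (p - y_onehot)
    return eps * np.sign(grad_x)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # 4 emotion classes, 8-dim features
b = np.zeros(4)
x = rng.normal(size=8)               # one (hypothetical) feature vector
y = np.eye(4)[2]                     # its one-hot label

# Adversarial training would add (x_adv, y) to the training batch.
x_adv = x + adversarial_perturbation(x, y, W, b, eps=0.1)
```

Because the cross-entropy of a linear softmax model is convex in x, the loss at x_adv is never lower than at x, which is exactly the worst-case local perturbation adversarial training regularizes against.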
    In the third part of the study we investigate the generalizability of models based on generative adversarial networks (GANs). GANs have gained a lot of attention from the machine learning community due to their ability to learn and mimic an input data distribution. GANs consist of a discriminator and a generator working in tandem, playing a min-max game to learn a target underlying data distribution when fed with data points sampled from a simpler distribution (such as a uniform or Gaussian distribution). Once trained, they allow synthetic generation of examples sampled from the target distribution. We investigate the applicability of GANs for obtaining lower dimensional representations from the higher dimensional feature vectors pertinent to emotion recognition. We also investigate their ability to generate synthetic higher dimensional feature vectors using points sampled from a lower dimensional prior. Specifically, we investigate two setups: (i) when the lower dimensional prior from which synthetic feature vectors are generated is pre-defined, and (ii) when the distribution of the lower dimensional prior is learned from training data. We define the metrics used to measure and analyze the performance of these generative models in different train/test conditions. We perform a cross-validation analysis followed by a cross-corpus study. Finally, we make an attempt towards understanding the relation between two different sub-problems encompassed under mental state detection, namely depression detection and emotion recognition. We propose approaches that can be investigated to build better depression detection models by leveraging our ability to recognize emotions accurately.

    Efficient Image Fusion Using DWT

    The process of combining important details from two or more source images into a single fused image is known as image fusion. Compared with any of the input images, the fused output image contains more detailed information. The objective of image fusion is to obtain the most desirable data from each image. Multi-sensor image fusion algorithms based on three different fusion techniques are discussed in this paper: "Pixel Level Iteration", the "Directional Discrete Cosine Transform (DDCT)", and the "Discrete Wavelet Transform (DWT)". The results are also furnished in image and table form for comparative examination of the above methods. This paper presents the three different image fusion techniques and their relative results, since the conventional fusion methods, direct pixel iteration and the discrete cosine transform, have a few drawbacks. The comparative study concludes that the discrete wavelet transform is one of the most effective algorithms for image fusion. In this thesis, two DWT-based algorithms are proposed: Maximum Intensity Replacement and the Band Averaging Method.
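The general DWT fusion scheme the paper evaluates can be sketched in numpy with a one-level Haar transform (a minimal illustration of the technique; the paper's exact Maximum Intensity Replacement and Band Averaging rules, wavelet choice and decomposition depth may differ):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: approximation (LL) and detail (LH, HL, HH) bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_dwt(img1, img2):
    """DWT fusion sketch: average the approximation bands, keep the
    larger-magnitude coefficient in each detail band (so sharp edges
    from either source survive in the fused image)."""
    s1, s2 = haar2d(img1), haar2d(img2)
    ll = (s1[0] + s2[0]) / 2
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(s1[1:], s2[1:])]
    return ihaar2d(ll, *details)
```

Fusing an image with itself reconstructs it exactly, which is a quick sanity check that the transform pair is lossless.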

    On Enhancing Speech Emotion Recognition using Generative Adversarial Networks

    Generative Adversarial Networks (GANs) have gained a lot of attention from the machine learning community due to their ability to learn and mimic an input data distribution. GANs consist of a discriminator and a generator working in tandem, playing a min-max game to learn a target underlying data distribution when fed with data points sampled from a simpler distribution (such as a uniform or Gaussian distribution). Once trained, they allow synthetic generation of examples sampled from the target distribution. We investigate the application of GANs to generate synthetic feature vectors for speech emotion recognition. Specifically, we investigate two setups: (i) a vanilla GAN that learns the distribution of a lower dimensional representation of the actual higher dimensional feature vector, and (ii) a conditional GAN that learns the distribution of the higher dimensional feature vectors conditioned on the labels or the emotional class to which they belong. As a potential practical application of these synthetically generated samples, we measure any improvement in a classifier's performance when the synthetic data is used along with real data for training. We perform cross-validation analyses followed by a cross-corpus study.
    Comment: 5 pages, Accepted to Interspeech, Hyderabad-201
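The min-max game described above can be made concrete with the standard GAN losses computed on one batch (a toy numpy sketch with linear stand-ins for the generator and discriminator networks; all dimensions and names are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator(z, Wg):
    """Maps low-dimensional prior samples to feature space (linear stand-in)."""
    return z @ Wg

def discriminator(x, wd):
    """Probability that x is a real feature vector (linear stand-in)."""
    return sigmoid(x @ wd)

Wg = rng.normal(size=(2, 5))   # 2-D Gaussian prior -> 5-D "feature vectors"
wd = rng.normal(size=5)

x_real = rng.normal(loc=1.0, size=(16, 5))   # batch of real feature vectors
z = rng.normal(size=(16, 2))                 # batch sampled from the prior
x_fake = generator(z, Wg)

# Discriminator ascends the value function; generator descends it.
eps = 1e-12
d_loss = (-np.mean(np.log(discriminator(x_real, wd) + eps))
          - np.mean(np.log(1.0 - discriminator(x_fake, wd) + eps)))
g_loss = -np.mean(np.log(discriminator(x_fake, wd) + eps))
```

Training alternates gradient steps on `d_loss` and `g_loss`; once the game converges, sampling z from the prior and applying the generator yields synthetic feature vectors that can augment the real training data.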

    Clinico-hematological profile of paediatric patient admitted with acute leukemia in tertiary care centre of central India

    Background: Leukemia is the most prevalent childhood cancer. Acute lymphoblastic leukemia (ALL) constitutes 75% of all cases. Objective: To find the most common clinical and hematological findings in pediatric patients with acute leukemia at a tertiary care center of central India. Materials and Methods: This retrospective study was done on 30 pediatric patients diagnosed with acute leukemia in the Department of Pediatrics and Oncology at Chirayu Medical College and Hospital, Bhopal. The study included children aged 6 months to 15 years who were admitted from June 2014 to June 2015. Data were collected retrospectively by reviewing the medical records of these patients. Clinical history, physical examination, hematological and radiological data were analyzed. Results: ALL was the most common hematological malignancy observed at our hospital. It was more prevalent in males, and fever was the most common presenting symptom, followed by fatigue and anorexia. Hepatosplenomegaly and pallor were the most common findings on clinical examination. Among patients with ALL, subtype L1 was the most common; among patients with acute myeloid leukemia, the M2 and M3 subtypes were most commonly documented.